feat(chat): per-user thread isolation #13

Open

charliecreates[bot] wants to merge 11 commits into main from ai-11-task-implement-per-user-thread-isolation-sup

Conversation


@charliecreates charliecreates bot commented Feb 7, 2026

Implements strict per-user visibility for chat threads/messages by making Supabase the source of truth and blocking access to any thread or message not owned by the authenticated user.

Resolves #11.

Changes

  • Enforce threads.user_id = auth user on all /api/tambo/threads/* operations (list/retrieve/update/generate-name/advancestream/delete).
  • Implement POST /threads/:id/messages and PUT /threads/:id/messages/:messageId/component-state against Supabase so the React SDK doesn’t fall back to proxying these to Tambo with the server API key.
  • Add Supabase RLS policies for threads + messages (select/insert/update/delete) to harden isolation at the database layer.

Verification

# Next build (includes typecheck): success
$ npm run build

# ESLint: fails (preexisting) with "TypeError: Converting circular structure to JSON"
$ npm run lint

Changes skipped in review:

  • src/app/api/tambo/[...path]/route.ts: suggestions about deeper routing refactors / richer error payloads / additional logging are out of scope for fixing cross-user thread access.
  • src/app/api/tambo/[...path]/route.ts: cancel endpoint remains a no-op (returns true) to match current client expectations; ownership is now enforced.
  • src/app/api/tambo/[...path]/route.ts: messages.create returns { id } (callers don’t use the response today).

- Simplify Tambo base URL resolution to use server-side `TAMBO_URL` only
- Centralize missing `TAMBO_API_KEY` error message and reuse in SSE/proxy paths
- Remove incorrect `reasoning` cast from message mapping
- Strongly type `AdvanceStreamRequestBody` and validate parsed JSON
- Preserve and forward `availableComponents`, `forceToolChoice`, and `toolCallCounts` to Tambo
- Stop overriding `created_at` when inserting/upserting messages to keep DB timestamps authoritative
- Normalize SSE line endings (`\r\n` → `\n`) before parsing event lines
- Disable auto thread name generation in `ChatClient` to avoid conflicts with server-side naming
- Extend middleware matcher to cover `/api/tambo/:path*` so Supabase auth/session middleware runs for Tambo API calls
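
For reference, the broadened matcher amounts to something like this sketch (any other entries the real middleware config contains are omitted here):

// middleware.ts (sketch): run the Supabase auth/session middleware for chat
// pages and for Tambo API calls, per the matcher change described above.
export const config = {
  matcher: ["/chat/:path*", "/api/tambo/:path*"],
};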
@charliecreates charliecreates bot requested a review from CharlieHelps February 7, 2026 17:56
@charliecreates charliecreates bot left a comment

Key endpoints in src/app/api/tambo/[...path]/route.ts rely on RLS for tenant isolation but do not scope queries by user_id, creating a high-risk footgun if policies are ever misconfigured. The SSE streaming/persistence flow can leave inconsistent state (persisting user messages even when the upstream request fails) and may duplicate messages due to unstable ID mapping. message-suggestions.tsx appears to disable generated suggestions entirely after the thread starts, likely a regression. The migration contains destructive deletes that should not run automatically in production environments.

Summary of changes

This PR updates the app to proxy all Tambo API traffic through a new authenticated Next.js route and introduces per-user thread/message persistence in Supabase.

Key changes

  • Tambo proxy + persistence API: adds src/app/api/tambo/[...path]/route.ts implementing:
    • authenticated /api/tambo proxy to Tambo with server-side TAMBO_API_KEY
    • local handlers for thread list/retrieve/update/name generation
    • SSE /threads/advancestream passthrough that persists streamed messages to Supabase
  • Per-user thread isolation:
    • adds migration supabase/migrations/20260207_per_user_threads.sql creating threads and messages tables with RLS policies
  • Chat page split:
    • src/app/chat/page.tsx becomes a server component that enforces auth + key presence and renders a new client component src/app/chat/chat-client.tsx
  • Client configuration updates:
    • TamboProvider now uses tamboUrl="/api/tambo" and apiKey="unused" in README/chat/interactables (see the sketch after this list)
    • example.env.local switches from NEXT_PUBLIC_TAMBO_API_KEY to server-only TAMBO_API_KEY
  • Suggestions UX change:
    • message-suggestions.tsx replaces generated suggestions hook with setting the input value directly
  • Middleware matcher broadened:
    • includes /chat/:path* and /api/tambo/:path* so Supabase session middleware runs for those routes
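
As noted in the client configuration item above, the provider wiring amounts to the following sketch (the registry/children the real components pass are assumed):

// Sketch: all Tambo traffic goes through the authenticated proxy route, so
// the client-side apiKey is a placeholder rather than a real secret.
<TamboProvider tamboUrl="/api/tambo" apiKey="unused">
  {children}
</TamboProvider>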

Comment on lines 130 to 151
async function handleThreadsList(
  supabase: Awaited<ReturnType<typeof createSupabaseServerClient>>,
  userId: string,
) {
  const { data, error } = await supabase
    .from("threads")
    .select("id, created_at, updated_at, name, metadata")
    .order("updated_at", { ascending: false });

  if (error) {
    return jsonError(error.message, 500);
  }

  const items = (data as unknown as ThreadRow[]).map((row) =>
    threadFromRow(row, userId),
  );

  return NextResponse.json({
    items,
    total: items.length,
    count: items.length,
  });

handleThreadsList lists all threads (.from("threads")...order(...)) without filtering by user_id. You’re relying on RLS to prevent cross-user reads, but that means:

  • the endpoint’s behavior becomes tightly coupled to RLS being enabled and correct in every environment (including local/dev)
  • failures/misconfigurations become data leaks
  • queries can become more expensive than necessary because Postgres must apply policy filters

Given the stated goal of per-user thread isolation, you should also filter at the query level.

Suggestion

Update the query to explicitly scope by user_id.

const { data, error } = await supabase
  .from("threads")
  .select("id, created_at, updated_at, name, metadata")
  .eq("user_id", userId)
  .order("updated_at", { ascending: false });

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.

Comment on lines 202 to 241
async function handleThreadUpdate(
  request: Request,
  supabase: Awaited<ReturnType<typeof createSupabaseServerClient>>,
  userId: string,
  threadId: string,
) {
  const body = (await request.json().catch(() => null)) as
    | { name?: string; metadata?: Record<string, unknown> }
    | null;

  if (!body) return jsonError("Invalid JSON body", 400);

  const update: Record<string, unknown> = {};
  if (typeof body.name === "string") update.name = body.name;
  if (body.metadata && typeof body.metadata === "object") {
    update.metadata = body.metadata;
  }

  if (Object.keys(update).length === 0) {
    return jsonError("No valid fields to update", 400);
  }

  const { error } = await supabase
    .from("threads")
    .update(update)
    .eq("id", threadId);

  if (error) return jsonError(error.message, 500);

  const { data: thread, error: readError } = await supabase
    .from("threads")
    .select("id, created_at, updated_at, name, metadata")
    .eq("id", threadId)
    .maybeSingle();

  if (readError) return jsonError(readError.message, 500);
  if (!thread) return jsonError("Not found", 404);

  return NextResponse.json(threadFromRow(thread as unknown as ThreadRow, userId));
}

handleThreadUpdate updates by id only. With RLS enabled it will likely be blocked for non-owners, but you should not depend on that as the only isolation boundary for an app-level API.

Also, as written, it updates the row even if the threadId doesn’t exist (no error), then re-reads and returns 404. That’s fine, but adding the user_id filter makes the update and read consistent and avoids revealing whether a thread exists for other users.

Suggestion

Scope the update and subsequent read to the user.

const { error } = await supabase
  .from("threads")
  .update(update)
  .eq("id", threadId)
  .eq("user_id", userId);

// ... later
const { data: thread, error: readError } = await supabase
  .from("threads")
  .select("id, created_at, updated_at, name, metadata")
  .eq("id", threadId)
  .eq("user_id", userId)
  .maybeSingle();

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.

Comment on lines +370 to +435
const { data: historyRows, error: historyError } = await supabase
  .from("messages")
  .select(
    [
      "role",
      "content",
      "additional_context",
      "component",
      "tool_call_request",
      "created_at",
    ].join(","),
  )
  .eq("thread_id", persistentThreadId)
  .order("created_at", { ascending: true });

if (historyError) return jsonError(historyError.message, 500);

const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});

if (appendError) return jsonError(appendError.message, 500);

const initialMessages = (historyRows as any[]).map((m) => ({
  role: m.role,
  content: m.content,
  additionalContext: m.additional_context ?? undefined,
  component: m.component ?? undefined,
  toolCallRequest: m.tool_call_request ?? undefined,
}));

const computeBody: Record<string, unknown> = {
  contextKey: userId,
  initialMessages,
  messageToAppend,
  clientTools: [],
};

if (body.availableComponents != null) {
  computeBody.availableComponents = body.availableComponents;
}
if (typeof body.forceToolChoice === "string") {
  computeBody.forceToolChoice = body.forceToolChoice;
}
if (body.toolCallCounts && typeof body.toolCallCounts === "object") {
  computeBody.toolCallCounts = body.toolCallCounts;
}

const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}

In handleAdvanceStream, you load historyRows before inserting messageToAppend, then send initialMessages (derived from historyRows) along with messageToAppend to Tambo. This seems intentional (history excludes the new message), but you also persist messageToAppend to DB first.

If the downstream Tambo call fails (network, auth, 5xx), you’ve already written the user message, but you return an error and never persist assistant/tool responses. This creates “dangling” user messages and can confuse clients on retry (duplicate user message, different assistant response, etc.).

Suggestion

Consider making persistence atomic-ish:

Option A (simplest): insert messageToAppend after confirming the Tambo request is accepted (tamboResponse.ok) and you have a body.

Option B: insert it first but mark it with metadata like { pending: true }, and clear that flag when streaming completes/persists.

Option C: wrap DB inserts/updates in a Postgres function/transaction via RPC so the thread+message+final upserts can be applied consistently.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit implementing Option A (defer insert until after successful Tambo response).
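
For concreteness, a minimal sketch of Option A reusing the identifiers from this hunk; the only change is the ordering of the insert relative to the upstream call:

// Option A (sketch): call Tambo first, and persist the user message only
// once the upstream request has been accepted.
const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  // Nothing has been written yet, so a failed upstream call leaves no
  // dangling user message behind.
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}

const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});
if (appendError) return jsonError(appendError.message, 500);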

Comment on lines +516 to +564
buffer += decoder.decode(value, { stream: true }).replaceAll("\r\n", "\n");

while (true) {
  const nl = buffer.indexOf("\n");
  if (nl === -1) break;

  const rawLine = buffer.slice(0, nl).trim();
  buffer = buffer.slice(nl + 1);

  if (!rawLine) continue;
  if (rawLine === "data: DONE") {
    pendingDone = true;
    continue;
  }
  if (rawLine.startsWith("error: ")) {
    controller.enqueue(encoder.encode(`${rawLine}\n`));
    continue;
  }

  const jsonStr = rawLine.startsWith("data: ") ? rawLine.slice(6) : rawLine;
  if (!jsonStr) continue;

  let chunk: any;
  try {
    chunk = JSON.parse(jsonStr);
  } catch {
    continue;
  }

  const dto = chunk?.responseMessageDto;
  if (dto && typeof dto === "object") {
    const originalMessageId = typeof dto.id === "string" ? dto.id : null;
    if (originalMessageId) {
      const mapped = messageIdMap.get(originalMessageId) ?? crypto.randomUUID();
      messageIdMap.set(originalMessageId, mapped);
      dto.id = mapped;

      finalMessages.set(mapped, {
        ...dto,
        threadId: persistentThreadId,
      });
    }

    dto.threadId = persistentThreadId;
  }

  const outLine = `data: ${JSON.stringify(chunk)}\n`;
  controller.enqueue(encoder.encode(outLine));
}

The SSE parser uses rawLine = buffer.slice(0, nl).trim(). Trimming is risky for SSE:

  • it can remove meaningful leading spaces in data: payloads (rare but valid)
  • it can change empty data: lines semantics
  • it can also collapse lines that should be forwarded as-is

Additionally, SSE events can have multiple data: lines per event; this implementation treats each line independently and will silently drop/mangle multi-line events.

Suggestion

Avoid trim() and implement minimal SSE framing:

  • read lines verbatim
  • accumulate data: lines until a blank line, then emit one event

At minimum, change to:

const rawLine = buffer.slice(0, nl);

and handle \r separately if needed.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with a safer SSE line parser (multi-line data: support).
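
To illustrate the framing, a self-contained sketch (createSseLineHandler is a hypothetical helper, not code from this PR):

// Minimal SSE framing: read lines verbatim, accumulate data: fields, and
// emit one event per blank line. No trim() anywhere.
function createSseLineHandler(emitEvent: (data: string) => void) {
  let eventData: string[] = [];

  return (line: string) => {
    if (line === "") {
      if (eventData.length > 0) {
        // Per the SSE spec, multi-line data fields are joined with "\n".
        emitEvent(eventData.join("\n"));
        eventData = [];
      }
      return;
    }
    if (line.startsWith("data:")) {
      // Strip the field name plus at most one leading space.
      eventData.push(line.slice(line.startsWith("data: ") ? 6 : 5));
    }
    // event:, id:, retry:, and comment lines could be handled here as needed.
  };
}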

Comment on lines +387 to +472
const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});

if (appendError) return jsonError(appendError.message, 500);

const initialMessages = (historyRows as any[]).map((m) => ({
  role: m.role,
  content: m.content,
  additionalContext: m.additional_context ?? undefined,
  component: m.component ?? undefined,
  toolCallRequest: m.tool_call_request ?? undefined,
}));

const computeBody: Record<string, unknown> = {
  contextKey: userId,
  initialMessages,
  messageToAppend,
  clientTools: [],
};

if (body.availableComponents != null) {
  computeBody.availableComponents = body.availableComponents;
}
if (typeof body.forceToolChoice === "string") {
  computeBody.forceToolChoice = body.forceToolChoice;
}
if (body.toolCallCounts && typeof body.toolCallCounts === "object") {
  computeBody.toolCallCounts = body.toolCallCounts;
}

const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}

const encoder = new TextEncoder();
const decoder = new TextDecoder();

const messageIdMap = new Map<string, string>();
const finalMessages = new Map<string, any>();

let didPersist = false;
const persistMessages = async () => {
  if (didPersist) return;
  didPersist = true;

  if (finalMessages.size > 0) {
    const rows = Array.from(finalMessages.values()).map((m) => ({
      id: m.id,
      thread_id: persistentThreadId,
      role: m.role,
      content: m.content,
      component_state: m.componentState ?? {},
      additional_context: m.additionalContext ?? null,
      component: m.component ?? null,
      tool_call_request: m.toolCallRequest ?? null,
      tool_calls: m.tool_calls ?? null,
      tool_call_id: m.tool_call_id ?? null,
      parent_message_id: m.parentMessageId ?? null,
      reasoning: m.reasoning ?? null,
      reasoning_duration_ms: m.reasoningDurationMS ?? null,
      error: m.error ?? null,
      is_cancelled: m.isCancelled ?? false,
      metadata: m.metadata ?? null,
    }));

    const { error } = await supabase.from("messages").upsert(rows);
    if (error) {
      throw new Error(error.message);
    }
  }

persistMessages does upsert(rows) with no conflict target or dedupe logic shown. If the table has a PK on id (it does), that’s fine, but you’re mapping Tambo dto.id to random UUIDs and persisting those.

However, you also insert messageToAppend with a random UUID earlier. If Tambo also echoes the appended user message back as a response DTO, you may end up with two persisted copies of the same logical message (one from the initial insert, one from the streamed final upsert), since the IDs differ.

Suggestion

Introduce a stable ID strategy to prevent duplicates. Common options:

  • Pass your generated messageToAppend ID through to Tambo (if supported) and map it back.
  • Detect/skip persisting streamed DTOs that represent the user message you already inserted (e.g., compare role+content hash + created_at proximity + tool_call_id).
  • Store a mapping in metadata for the appended message linking tambo_original_id to your DB id.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that persists tambo_original_id in messages.metadata and uses it to dedupe/upsert deterministically.
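
As a sketch of the detection approach (isEchoOfAppendedMessage is a hypothetical guard, and the role/content comparison is illustrative rather than code from this PR):

// Skip streamed DTOs that echo the user message already inserted for this
// request; dto and messageToAppend are the variables from this hunk.
const isEchoOfAppendedMessage = (dto: { role?: string; content?: unknown }) =>
  dto.role === messageToAppend.role &&
  JSON.stringify(dto.content) === JSON.stringify(messageToAppend.content);

if (dto && typeof dto === "object" && !isEchoOfAppendedMessage(dto)) {
  // ...map the ID and record it in finalMessages as before
}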

Comment on lines +489 to +564
const stream = new ReadableStream<Uint8Array>({
  async pull(controller) {
    const { done, value } = await reader.read();
    if (done) {
      try {
        await persistMessages();
        if (pendingDone) {
          controller.enqueue(encoder.encode("data: DONE\n"));
        }
      } catch (error) {
        console.error("Failed to persist streamed messages", {
          error,
          userId,
          threadId: persistentThreadId,
          messageCount: finalMessages.size,
        });

        controller.enqueue(
          encoder.encode(
            "error: Failed to persist conversation state, some messages may be missing.\n",
          ),
        );
      }
      controller.close();
      return;
    }

    buffer += decoder.decode(value, { stream: true }).replaceAll("\r\n", "\n");

    while (true) {
      const nl = buffer.indexOf("\n");
      if (nl === -1) break;

      const rawLine = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);

      if (!rawLine) continue;
      if (rawLine === "data: DONE") {
        pendingDone = true;
        continue;
      }
      if (rawLine.startsWith("error: ")) {
        controller.enqueue(encoder.encode(`${rawLine}\n`));
        continue;
      }

      const jsonStr = rawLine.startsWith("data: ") ? rawLine.slice(6) : rawLine;
      if (!jsonStr) continue;

      let chunk: any;
      try {
        chunk = JSON.parse(jsonStr);
      } catch {
        continue;
      }

      const dto = chunk?.responseMessageDto;
      if (dto && typeof dto === "object") {
        const originalMessageId = typeof dto.id === "string" ? dto.id : null;
        if (originalMessageId) {
          const mapped = messageIdMap.get(originalMessageId) ?? crypto.randomUUID();
          messageIdMap.set(originalMessageId, mapped);
          dto.id = mapped;

          finalMessages.set(mapped, {
            ...dto,
            threadId: persistentThreadId,
          });
        }

        dto.threadId = persistentThreadId;
      }

      const outLine = `data: ${JSON.stringify(chunk)}\n`;
      controller.enqueue(encoder.encode(outLine));
    }

SSE correctness: outgoing stream missing blank line delimiter

Server-Sent Events require events to be terminated by a blank line (\n\n). This implementation enqueues data: ...\n and data: DONE\n but not the required extra newline. Some clients will buffer indefinitely or parse incorrectly.

Related: the error path enqueues error: ...\n which is not a standard SSE field (clients typically listen for event: + data:). If the Tambo client expects this custom error: prefix, fine—but the missing \n\n is still a protocol issue.

Suggestion

Terminate SSE events with a double newline:

  • For normal data chunks:

    const outLine = `data: ${JSON.stringify(chunk)}\n\n`;
    controller.enqueue(encoder.encode(outLine));

  • For DONE:

    controller.enqueue(encoder.encode("data: DONE\n\n"));

If you also want standards-friendly errors, consider event: error\ndata: ...\n\n.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit fixing the SSE framing.

Comment on lines 474 to 481
const { error: threadUpdateError } = await supabase
  .from("threads")
  .update({ updated_at: new Date().toISOString() })
  .eq("id", persistentThreadId);

if (threadUpdateError) {
  throw new Error(threadUpdateError.message);
}

Persisting threads.updated_at should not bypass DB authority

persistMessages() updates threads.updated_at by sending an explicit ISO string:

.update({ updated_at: new Date().toISOString() })

But the migration already adds a trigger (threads_set_updated_at) to set updated_at = now() on update. Setting the value from the app defeats that purpose and risks clock skew.

You can update a no-op field (or re-set name to itself) or update updated_at using a DB function (if you prefer), but best is to let the trigger handle the timestamp.

Suggestion

Remove app-supplied timestamps and rely on the trigger. For example, add a touch boolean/field, or just update a benign field:

const { error: threadUpdateError } = await supabase
  .from("threads")
  .update({})
  .eq("id", persistentThreadId);

If Supabase rejects empty updates, add a dedicated touched_at or last_activity_at column (recommended) and set it via now() in the DB.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the client-set updated_at and implements a safer touch mechanism.
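
If the DB-function route is preferred, a sketch (touch_thread is a hypothetical SQL function, not part of this PR, that would bump the timestamp with now() inside the database):

// Hypothetical RPC so the timestamp comes from the database clock, not the
// app server's clock.
const { error: threadUpdateError } = await supabase.rpc("touch_thread", {
  p_thread_id: persistentThreadId,
});
if (threadUpdateError) throw new Error(threadUpdateError.message);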

Comment on lines +602 to +614
const headers = new Headers(request.headers);
headers.set("x-api-key", apiKey);
headers.delete("host");
headers.delete("content-length");

const body = request.body ? request.clone().body : undefined;

const response = await fetch(targetUrl, {
  method: request.method,
  headers,
  body,
  redirect: "manual",
});

Proxy security: forwarding cookie/authorization headers to Tambo

proxyToTambo() clones all incoming headers and adds x-api-key. That likely forwards cookie and possibly authorization to the upstream Tambo API. This is an unnecessary data leak (session cookies, CSRF tokens, etc.) to a third-party service.

Given this is an authenticated proxy, you should explicitly allowlist headers instead of pass-through, or at least strip sensitive ones (cookie, authorization, x-forwarded-* as needed).

Suggestion

Switch to an allowlist strategy. Example:

const headers = new Headers();
headers.set("x-api-key", apiKey);
headers.set("accept", request.headers.get("accept") ?? "application/json");
headers.set("content-type", request.headers.get("content-type") ?? "application/json");

Or minimally strip sensitive headers:

headers.delete("cookie");
headers.delete("authorization");

Reply with "@CharlieHelps yes please" if you'd like me to add a commit implementing a safe header allowlist for the proxy.

Comment on lines 97 to 124
  const { thread } = useTambo();
- const {
-   suggestions: generatedSuggestions,
-   selectedSuggestionId,
-   accept,
-   generateResult: { isPending: isGenerating, error },
- } = useTamboSuggestions({ maxSuggestions });
- // Combine initial and generated suggestions, but only use initial ones when thread is empty
+ const [selectedSuggestionId, setSelectedSuggestionId] =
+   React.useState<string | null>(null);
+ const { setValue: setInputValue } = useTamboThreadInput();
+
+ const accept = React.useCallback(
+   async ({ suggestion }: { suggestion: Suggestion }) => {
+     setInputValue(suggestion.detailedSuggestion);
+     setSelectedSuggestionId(suggestion.id);
+   },
+   [setInputValue],
+ );
+
+ const isGenerating = false;
+ const error: Error | null = null;
+
+ // Only use pre-seeded suggestions when thread is empty.
  const suggestions = React.useMemo(() => {
-   // Only use pre-seeded suggestions if thread is empty
    if (!thread?.messages?.length && initialSuggestions.length > 0) {
      return initialSuggestions.slice(0, maxSuggestions);
    }
-   // Otherwise use generated suggestions
-   return generatedSuggestions;
+   return [];
  }, [
    thread?.messages?.length,
-   generatedSuggestions,
    initialSuggestions,
    maxSuggestions,
  ]);

message-suggestions.tsx removes useTamboSuggestions and hard-codes isGenerating=false, error=null, and returns [] suggestions once the thread has messages.

That effectively disables generated suggestions entirely after the first message. If that’s intended as a temporary workaround, it should be explicit; otherwise it’s a functional regression (UI will show nothing where it previously generated suggestions).

Suggestion

If you meant to keep suggestions generation, reintroduce useTamboSuggestions (or a replacement) and only fall back to initialSuggestions when the thread is empty.

If you meant to disable suggestions for now, gate the UI behind a prop/feature flag and remove dead state (isGenerating, error) to reduce misleading code.

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that restores generated suggestions while still supporting pre-seeded initialSuggestions.
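
A sketch of the first option, reusing the hook shape visible in the removed code:

// Restore generated suggestions while keeping pre-seeded ones as the
// empty-thread fallback.
const {
  suggestions: generatedSuggestions,
  selectedSuggestionId,
  accept,
  generateResult: { isPending: isGenerating, error },
} = useTamboSuggestions({ maxSuggestions });

const suggestions = React.useMemo(() => {
  if (!thread?.messages?.length && initialSuggestions.length > 0) {
    return initialSuggestions.slice(0, maxSuggestions);
  }
  return generatedSuggestions;
}, [
  thread?.messages?.length,
  generatedSuggestions,
  initialSuggestions,
  maxSuggestions,
]);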

Comment on lines +117 to +121
-- One-time legacy cleanup
delete from public.threads where user_id is null;

delete from public.messages
where thread_id not in (select id from public.threads);

The migration includes destructive "one-time legacy cleanup":

delete from public.threads where user_id is null;
delete from public.messages where thread_id not in (select id from public.threads);

This is dangerous in real deployments:

  • it irreversibly deletes data on migration apply
  • it can delete data unexpectedly if the schema is applied to an environment with partially migrated/seeded data
  • threads.user_id is not null in this same migration, so user_id is null should never be true for newly created rows; this cleanup is either obsolete or indicates pre-existing inconsistent schema/data.

Destructive data cleanup should be a separate, explicitly run script (or at least guarded behind checks).

Suggestion

Remove these deletes from the migration and handle legacy cleanup in a separate, manually-invoked script (or a guarded DO block that verifies you’re in a dev environment).

For example, create supabase/scripts/legacy_cleanup.sql and document it, or wrap in something like:

-- only in non-production, if you have a reliable marker
DO $$
BEGIN
  IF current_setting('app.env', true) = 'development' THEN
    DELETE ...;
  END IF;
END $$;

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the destructive cleanup from the migration and adds a separate cleanup script + README note.

@charliecreates charliecreates bot changed the title Ai 2 work on issue 11 comment workflow feat(chat): per-user thread isolation Feb 7, 2026
@charliecreates charliecreates bot marked this pull request as ready for review February 7, 2026 19:04
@charliecreates charliecreates bot left a comment

Key security/correctness gaps remain: proxyToTambo() currently forwards sensitive headers (cookie/authorization) upstream, and the SSE response framing is not spec-compliant (missing \n\n event delimiters) with a brittle line parser (trim() + no multi-line support). handleAdvanceStream() persists the user message before the upstream call, which can leave dangling state on upstream failure. Finally, the migration contains destructive deletes that should not run as part of normal schema migration application.

Summary of changes

Proxying Tambo through authenticated server routes

  • Updated TamboProvider usage to route requests via tamboUrl="/api/tambo" and set apiKey="unused" in:
    • README.md
    • src/app/interactables/page.tsx
    • new src/app/chat/chat-client.tsx
  • Switched to a server-only secret by replacing NEXT_PUBLIC_TAMBO_API_KEY with TAMBO_API_KEY in example.env.local.

New authenticated API surface for threads/messages

  • Added src/app/api/tambo/[...path]/route.ts implementing:
    • per-user thread operations (list/retrieve/update/generate-name/cancel/delete)
    • per-thread messages endpoints (GET/POST /threads/:id/messages, PUT /threads/:id/messages/:messageId/component-state)
    • SSE /threads/advancestream pass-through that persists streamed messages to Supabase
    • a generic proxy fallback to upstream Tambo for unhandled paths

UI/auth wiring changes

  • Split chat page into a server component (src/app/chat/page.tsx) that enforces auth + TAMBO_API_KEY presence, rendering a new client component src/app/chat/chat-client.tsx.
  • Expanded Supabase auth middleware matching to cover /chat/:path* and /api/tambo/:path*.

Supabase schema + RLS

  • Added migrations:
    • supabase/migrations/20260207_per_user_threads.sql (tables, trigger, RLS select/insert/update policies + legacy cleanup deletes)
    • supabase/migrations/20260207_per_user_threads_delete_policies.sql (delete policies + repeats other policies)

Suggestions UX change

  • src/components/tambo/message-suggestions.tsx removed generated suggestions (useTamboSuggestions) and now only shows pre-seeded suggestions when a thread is empty; after that it returns [] and uses useTamboThreadInput to set the input value on accept.

@charliecreates charliecreates bot removed the request for review from CharlieHelps February 7, 2026 19:07
Successfully merging this pull request may close these issues.

Task: Implement Per-User Thread Isolation (Supabase + Tambo)